If you can distinguish, you can express: Galois theory, Stone–Weierstrass, machine learning, and linguistics
Blum-Smith, Ben, Brugman, Claudia, Conners, Thomas, Villar, Soledad
This essay develops a parallel between the Fundamental Theorem of Galois Theory and the Stone–Weierstrass theorem: both can be viewed as assertions that tie the distinguishing power of a class of objects to their expressive power. We provide an elementary theorem connecting the relevant notions of "distinguishing power". We also discuss machine learning and data science contexts in which these theorems, and more generally the theme of links between distinguishing power and expressive power, appear. Finally, we discuss the same theme in the context of linguistics, where it appears as a foundational principle, and illustrate it with several examples.
- Asia > Indonesia > New Guinea > Western New Guinea > Papua (0.14)
- South America > Brazil > Rio de Janeiro > Rio de Janeiro (0.04)
- Oceania > Papua New Guinea (0.04)
- (9 more...)
Cross-Cultural Transfer of Commonsense Reasoning in LLMs: Evidence from the Arab World
Almheiri, Saeed, Hossam, Rania, Attia, Mena, Wang, Chenxi, Nakov, Preslav, Baldwin, Timothy, Koto, Fajri
Large language models (LLMs) often reflect Western-centric biases, limiting their effectiveness in diverse cultural contexts. Although some work has explored cultural alignment, the potential for cross-cultural transfer, using alignment in one culture to improve performance in others, remains underexplored. This paper investigates cross-cultural transfer of commonsense reasoning in the Arab world, where linguistic and historical similarities coexist with local cultural differences. Using a culturally grounded commonsense reasoning dataset covering 13 Arab countries, we evaluate lightweight alignment methods such as in-context learning and demonstration-based reinforcement (DITTO), alongside baselines like supervised fine-tuning and direct preference optimization. Our results show that merely 12 culture-specific examples from one country can improve performance in others by 10% on average, within multilingual models. In addition, we demonstrate that out-of-culture demonstrations from Indonesia and US contexts can match or surpass in-culture alignment for MCQ reasoning, highlighting cultural commonsense transferability beyond the Arab world. These findings demonstrate that efficient cross-cultural alignment is possible and offer a promising approach to adapt LLMs to low-resource cultural settings.
- Africa > Middle East > Egypt (0.15)
- Europe > Austria > Vienna (0.14)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- (18 more...)
CROPE: Evaluating In-Context Adaptation of Vision and Language Models to Culture-Specific Concepts
Nikandrou, Malvina, Pantazopoulos, Georgios, Vitsakis, Nikolas, Konstas, Ioannis, Suglia, Alessandro
As Vision and Language models (VLMs) become accessible across the globe, it is important that they demonstrate cultural knowledge. In this paper, we introduce CROPE, a visual question answering benchmark designed to probe the knowledge of culture-specific concepts and evaluate the capacity for cultural adaptation through contextual information. This allows us to distinguish between parametric knowledge acquired during training and contextual knowledge provided during inference via visual and textual descriptions. Our evaluation of several state-of-the-art open VLMs shows large performance disparities between culture-specific and common concepts in the parametric setting. Moreover, experiments with contextual knowledge indicate that models struggle to effectively utilize multimodal information and bind culture-specific concepts to their depictions. Our findings reveal limitations in the cultural understanding and adaptability of current VLMs that need to be addressed toward more culturally inclusive models.
- North America > Dominican Republic (0.04)
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- Asia > Middle East > Jordan (0.04)
- (5 more...)
Can LLMs Really Learn to Translate a Low-Resource Language from One Grammar Book?
Aycock, Seth, Stap, David, Wu, Di, Monz, Christof, Sima'an, Khalil
Extremely low-resource (XLR) languages lack substantial corpora for training NLP models, motivating the use of all available resources such as dictionaries and grammar books. Machine Translation from One Book (Tanzer et al., 2024) suggests that prompting long-context LLMs with one grammar book enables English–Kalamang translation for an unseen XLR language, a noteworthy case of linguistic knowledge helping an NLP task. We investigate whether the book's grammatical explanations or its parallel examples are most effective for learning XLR translation, finding almost all improvement stems from the parallel examples. Further, we find similar results for Nepali, a seen low-resource language, and achieve performance comparable to an LLM with a grammar book by simply fine-tuning an encoder-decoder translation model. We then investigate where grammar books help by testing two linguistic tasks, grammaticality judgment and gloss prediction, and we explore what kind of grammatical knowledge helps by introducing a typological feature prompt that achieves leading results on these more relevant tasks. We thus emphasise the importance of task-appropriate data for XLR languages: parallel examples for translation, and grammatical data for linguistic tasks. As we find no evidence that long-context LLMs can make effective use of grammatical explanations for XLR translation, we suggest that data collection for multilingual XLR tasks such as translation is best focused on parallel data over linguistic description.
- North America > Canada > Ontario > Toronto (0.04)
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- North America > Mexico > Mexico City > Mexico City (0.04)
- (22 more...)
- Information Technology > Artificial Intelligence > Natural Language > Machine Translation (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
Synergetic Event Understanding: A Collaborative Approach to Cross-Document Event Coreference Resolution with Large Language Models
Min, Qingkai, Guo, Qipeng, Hu, Xiangkun, Huang, Songfang, Zhang, Zheng, Zhang, Yue
Cross-document event coreference resolution (CDECR) involves clustering event mentions across multiple documents that refer to the same real-world events. Existing approaches utilize fine-tuning of small language models (SLMs) like BERT to address the compatibility among the contexts of event mentions. However, due to the complexity and diversity of contexts, these models are prone to learning simple co-occurrences. Recently, large language models (LLMs) like ChatGPT have demonstrated impressive contextual understanding, yet they encounter challenges in adapting to specific information extraction (IE) tasks. In this paper, we propose a collaborative approach for CDECR, leveraging the capabilities of both a universally capable LLM and a task-specific SLM. The collaborative strategy begins with the LLM accurately and comprehensively summarizing events through prompting. Then, the SLM refines its learning of event representations based on these insights during fine-tuning. Experimental results demonstrate that our approach surpasses the performance of both the large and small language models individually, forming a complementary advantage. Across various datasets, our approach achieves state-of-the-art performance, underscoring its effectiveness in diverse scenarios.
- Asia > Singapore (0.05)
- North America > Canada > Ontario > Toronto (0.04)
- Asia > Indonesia > New Guinea > Western New Guinea > Papua (0.04)
- (18 more...)
An Open-Source Reproducible Chess Robot for Human-Robot Interaction Research
Zhang, Renchi, de Winter, Joost, Dodou, Dimitra, Seyffert, Harleigh, Eisma, Yke Bauke
Recent advancements in AI have sped up the evolution of versatile robot designs. Chess provides a standardized environment that allows for the evaluation of the influence of robot behaviors on human behavior. This article presents an open-source chess robot for human-robot interaction (HRI) research, specifically focusing on verbal and non-verbal interactions. OpenChessRobot recognizes chess pieces using computer vision, executes moves, and interacts with the human player using voice and robotic gestures. We detail the software design, provide quantitative evaluations of the robot's efficacy, and offer a guide for its reproducibility.
Keywords: Artificial Intelligence, Chess, Human-robot Interaction, Open-source, Transfer Learning
1. Introduction
Robots are becoming increasingly common across a variety of traditionally human-controlled domains. Examples range from automated mowers that maintain community lawns to robots in assembly lines and agricultural settings. Recent scientific advancements in AI have enabled new opportunities for intelligent sensing, reasoning, and acting by robots. In particular, the rapid development of large language models, such as ChatGPT, and of vision-language models has lowered the barrier to human-robot communication by transforming text and images into interpretable actions, and vice versa. As technology advances, it is likely that robots will attain greater capabilities and will be able to tackle tasks previously within the exclusive realm of human expertise. This ongoing evolution may also lead to closer and more productive interactions between humans and robots. At the same time, integrating different AI-based robotic components remains a challenge, and the human-robot interaction (HRI) field lags in endorsing reproducibility principles (Gunes et al., 2022). Encouraging transparent and reproducible research therefore remains an ongoing task.
Furthermore, chess has played an important role in advancing the field of AI, from Claude Shannon's chess-playing algorithm (Shannon, 1950) to the success of IBM's Deep Blue (Campbell et al., 2002) and DeepMind's self-play learning algorithm (Silver et al., 2018). In this paper, we incorporate modern AI algorithms into the design of a chess-playing robot to be used for studying HRI. HRI research may benefit from a chess-based setup because the game of chess provides a controlled, rule-based environment in which the impact of robots on human players can be precisely measured.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Nevada > Clark County > Las Vegas (0.04)
- Europe > Netherlands > South Holland > Delft (0.04)
- (15 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Robots > Humanoid Robots (0.91)
- Information Technology > Artificial Intelligence > Games > Chess (0.91)
A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution
Ding, Bowen, Min, Qingkai, Ma, Shengkun, Li, Yingjie, Yang, Linyi, Zhang, Yue
Based on Pre-trained Language Models (PLMs), event coreference resolution (ECR) systems have demonstrated outstanding performance in clustering coreferential events across documents. However, the state-of-the-art system exhibits an excessive reliance on the 'triggers lexical matching' spurious pattern in the input mention pair text. We formalize the decision-making process of the baseline ECR system using a Structural Causal Model (SCM), aiming to identify spurious and causal associations (i.e., rationales) within the ECR task. Leveraging the debiasing capability of counterfactual data augmentation, we develop a rationale-centric counterfactual data augmentation method with LLM-in-the-loop. This method is specialized for pairwise input in the ECR system, where we conduct direct interventions on triggers and context to mitigate the spurious association while emphasizing the causation.
[Figure 1, caption recovered from interleaved text: The distribution of 'triggers lexical matching' in mention pairs from the ECB+ training set, along with a false negative example from Held et al.'s system.]
- North America > United States > Missouri > Jackson County > Kansas City (0.14)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > Indiana > Marion County > Indianapolis (0.04)
- (28 more...)
- Research Report (1.00)
- Personal > Obituary (1.00)
- Leisure & Entertainment > Sports > Football (1.00)
- Information Technology > Security & Privacy (1.00)
- Leisure & Entertainment > Sports > Soccer (0.92)
- (2 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.54)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.47)
IndoCulture: Exploring Geographically-Influenced Cultural Commonsense Reasoning Across Eleven Indonesian Provinces
Koto, Fajri, Mahendra, Rahmad, Aisyah, Nurul, Baldwin, Timothy
Although commonsense reasoning is greatly shaped by cultural and geographical factors, previous studies on language models have predominantly centered on English cultures, potentially resulting in an Anglocentric bias. In this paper, we introduce IndoCulture, aimed at understanding the influence of geographical factors on language model reasoning ability, with a specific emphasis on the diverse cultures found within eleven Indonesian provinces. In contrast to prior works that relied on templates (Yin et al., 2022) and online scraping (Fung et al., 2024), we created IndoCulture by asking local people to manually develop the context and plausible options based on predefined topics. Evaluations of 23 language models reveal several insights: (1) even the best open-source model struggles with an accuracy of 53.2%, (2) models often provide more accurate predictions for specific provinces, such as Bali and West Java, and (3) the inclusion of location contexts enhances performance, especially in larger models like GPT-4, emphasizing the significance of geographical context in commonsense reasoning.
- Health & Medicine (0.68)
- Education (0.47)
- Information Technology (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Commonsense Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.95)
SeeGULL Multilingual: a Dataset of Geo-Culturally Situated Stereotypes
Bhutani, Mukul, Robinson, Kevin, Prabhakaran, Vinodkumar, Dave, Shachi, Dev, Sunipa
While generative multilingual models are rapidly being deployed, their safety and fairness evaluations are largely limited to resources collected in English. This is especially problematic for evaluations targeting inherently socio-cultural phenomena such as stereotyping, where it is important to build multilingual resources that reflect the stereotypes prevalent in respective language communities. However, gathering these resources, at scale, in varied languages and regions poses a significant challenge, as it requires broad socio-cultural knowledge and can also be prohibitively expensive. To overcome this critical gap, we employ a recently introduced approach that couples LLM generations for scale with culturally situated validations for reliability, and build SeeGULL Multilingual, a global-scale multilingual dataset of social stereotypes, containing over 25K stereotypes, spanning 20 languages, with human annotations across 23 regions, and demonstrate its utility in identifying gaps in model evaluations. Content warning: Stereotypes shared in this paper can be offensive.
Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions
Elazar, Yanai, Kassner, Nora, Ravfogel, Shauli, Feder, Amir, Ravichander, Abhilasha, Mosbach, Marius, Belinkov, Yonatan, Schütze, Hinrich, Goldberg, Yoav
Large amounts of training data are one of the major reasons for the high performance of state-of-the-art NLP models. But what exactly in the training data causes a model to make a certain prediction? We seek to answer this question by providing a language for describing how training data influences predictions, through a causal framework. Importantly, our framework bypasses the need to retrain expensive models and allows us to estimate causal effects based on observational data alone. Addressing the problem of extracting factual knowledge from pretrained language models (PLMs), we focus on simple data statistics such as co-occurrence counts and show that these statistics do influence the predictions of PLMs, suggesting that such models rely on shallow heuristics. Our causal framework and our results demonstrate the importance of studying datasets and the benefits of causality for understanding NLP models.
- Europe > France > Île-de-France > Paris > Paris (0.04)
- Asia > Middle East > Republic of Türkiye > Ankara Province > Ankara (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- (18 more...)
- Leisure & Entertainment (0.93)
- Media > Television (0.68)
- Health & Medicine (0.67)